Machine learning and computer vision have driven many of the greatest advances in the modeling of Deep Convolutional Neural Networks (DCNNs). Nowadays, most research has focused on improving recognition accuracy with better DCNN models and learning approaches. The recurrent convolutional approach is not widely applied, appearing in only a few DCNN architectures. On the other hand, Inception-v4 and Residual networks have quickly become popular in the computer vision community. In this paper, we introduce a new DCNN model called the Inception Recurrent Residual Convolutional Neural Network (IRRCNN), which combines the strengths of the Recurrent Convolutional Neural Network (RCNN), the Inception network, and the Residual network. This approach improves the recognition accuracy of the Inception-residual network with the same number of network parameters. In addition, the proposed architecture generalizes the Inception network, the RCNN, and the Residual network with significantly improved training accuracy. We have empirically evaluated the performance of the IRRCNN model on several benchmarks, including CIFAR-10, CIFAR-100, TinyImageNet-200, and CU3D-100. The experimental results show higher recognition accuracy than most popular DCNN models, including the RCNN. We have also compared the IRRCNN approach against its Equivalent Inception Network (EIN) and Equivalent Inception Residual Network (EIRN) counterparts on the CIFAR-100 dataset, observing improvements in classification accuracy of about 4.53%, 4.49%, and 3.56% over the RCNN, EIN, and EIRN, respectively. Furthermore, experiments on the TinyImageNet-200 and CU3D-100 datasets show that the IRRCNN achieves better testing accuracy than the Inception Recurrent CNN (IRCNN), the EIN, and the EIRN.
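To make the core idea concrete, the sketch below illustrates the kind of building block the abstract describes: a recurrent convolutional layer (RCL), where the same convolution is applied repeatedly and fed its own previous output, wrapped with a residual skip connection. This is a minimal numpy sketch under simplifying assumptions, not the paper's implementation: 1x1 convolutions stand in for the full convolutions, the inception branches are omitted, and the names `w_f`, `w_r`, and `recurrent_residual_block` are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def recurrent_residual_block(x, w_f, w_r, steps=3):
    """Sketch of a recurrent convolutional layer (RCL) with a residual
    skip connection, unrolled for `steps` time steps.

    x   : (channels, height, width) input feature map
    w_f : (channels, channels) feed-forward 1x1-conv weights (hypothetical)
    w_r : (channels, channels) recurrent 1x1-conv weights (hypothetical)
    """
    # Feed-forward response; in the RCL formulation it is computed once
    # and shared across all unrolled time steps.
    ff = np.einsum('oc,chw->ohw', w_f, x)
    h = relu(ff)  # t = 0: no recurrent input yet
    for _ in range(steps):
        # Recurrent response from the previous state, added to the
        # fixed feed-forward response at each unrolled step.
        h = relu(ff + np.einsum('oc,chw->ohw', w_r, h))
    # Residual skip: add the block input back, residual-network style.
    return x + h

# Usage: the block preserves the feature-map shape, so it can be stacked.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w_f = 0.1 * rng.standard_normal((4, 4))
w_r = 0.1 * rng.standard_normal((4, 4))
y = recurrent_residual_block(x, w_f, w_r)
```

Because the recurrent steps reuse the same weights, unrolling deepens the effective receptive field without adding parameters, which is how the IRRCNN can improve accuracy at the same parameter count.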